Factorization in Formal Languages
We consider several novel aspects of unique factorization in formal
languages. We reprove the familiar fact that the set uf(L) of words having
unique factorization into elements of L is regular if L is regular, and from
this deduce a quadratic upper and lower bound on the length of the shortest
word not in uf(L). We observe that uf(L) need not be context-free if L is
context-free.
Next, we consider variations on unique factorization. We define a notion of
"semi-unique" factorization, where every factorization has the same number of
terms, and show that, if L is regular or even finite, the set of words having
such a factorization need not be context-free. Finally, we consider additional
variations, such as unique factorization "up to permutation" and "up to
subset".
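The membership test behind uf(L) is easy to sketch when L is finite: a word belongs to uf(L) exactly when it has precisely one factorization into elements of L. A minimal dynamic-programming sketch (our own illustration, not taken from the paper; L is assumed not to contain the empty word):

```python
def count_factorizations(w, L):
    """Count the factorizations of w as a concatenation of words from L.

    dp[i] is the number of factorizations of the prefix w[:i].
    """
    dp = [0] * (len(w) + 1)
    dp[0] = 1  # the empty word factors uniquely (as the empty product)
    for i in range(1, len(w) + 1):
        for f in L:
            if i >= len(f) and w[i - len(f):i] == f:
                dp[i] += dp[i - len(f)]
    return dp[len(w)]

def in_uf(w, L):
    """w is in uf(L) iff it has exactly one factorization over L."""
    return count_factorizations(w, L) == 1

L = {"a", "ab", "ba"}
count_factorizations("aba", L)  # 2: a.ba and ab.a
in_uf("ab", L)                  # True
in_uf("aba", L)                 # False
```

For regular L the paper works with automata instead, but the counting view above is what "unique factorization" means word by word.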
Optimal Reachability in Divergent Weighted Timed Games
Weighted timed games are played by two players on a timed automaton equipped
with weights: one player wants to minimise the accumulated weight while
reaching a target, while the other has an opposite objective. Used in a
reactive synthesis perspective, this quantitative extension of timed games
allows one to measure the quality of controllers. Weighted timed games are
notoriously difficult and quickly undecidable, even when restricted to
non-negative weights. Decidability results exist for subclasses of one-clock
games, and for a subclass with non-negative weights defined by a semantical
restriction on the weights of cycles. In this work, we introduce the class of
divergent weighted timed games as a generalisation of this semantical
restriction to arbitrary weights. We show how to compute their optimal value,
yielding the first decidable class of weighted timed games with negative
weights and an arbitrary number of clocks. In addition, we prove that
divergence can be decided in polynomial space. Last, we prove that for untimed
games, this restriction yields a class of games for which the value can be
computed in polynomial time.
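For the untimed case mentioned last, the value of a min-cost reachability game can be computed by value iteration; with non-negative weights, a number of rounds equal to the number of vertices reaches the fixed point, and negative weights are exactly where restrictions such as divergence become necessary. A hedged sketch in our own formulation (not the paper's algorithm):

```python
import math

def game_values(vertices, edges, owner, target):
    """Value iteration for an untimed min-cost reachability game (sketch).

    vertices: list of vertex names
    edges:    dict mapping v to a list of (weight, successor) moves
    owner:    dict mapping v to "min" or "max" (who moves at v)
    target:   the vertex Min wants to reach with least accumulated weight
    """
    V = {v: math.inf for v in vertices}
    V[target] = 0.0
    for _ in range(len(vertices)):
        for v in vertices:
            if v == target:
                continue
            options = [w + V[u] for w, u in edges.get(v, [])]
            if options:
                V[v] = min(options) if owner[v] == "min" else max(options)
    return V

# Min moves at "a", Max at "b"; Max forces the 5-weight edge from "b",
# so Min's best choice at "a" is the direct edge of weight 1.
V = game_values(["a", "b", "t"],
                {"a": [(1, "t"), (0, "b")], "b": [(5, "t"), (2, "t")]},
                {"a": "min", "b": "max", "t": "min"},
                "t")
```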
Credimus
We believe that economic design and computational complexity---while already
important to each other---should become even more important to each other with
each passing year. But for that to happen, experts in areas such as social
choice, economics, and political science on the one hand, and in computational
complexity on the other, will have to better understand each other's
worldviews.
This article, written by two complexity theorists who also work in
computational social choice theory, focuses on one direction of that process by
presenting a brief overview of how most computational complexity theorists view
the world. Although our immediate motivation is to make the lens through which
complexity theorists see the world be better understood by those in the social
sciences, we also feel that even within computer science it is very important
for nontheoreticians to understand how theoreticians think, just as it is
equally important within computer science for theoreticians to understand how
nontheoreticians think.
The accuracy of breast volume measurement methods: a systematic review
Breast volume is a key metric in breast surgery and there are a number of different methods which measure it. However, a lack of knowledge regarding a method's accuracy and comparability has made it difficult to establish a clinical standard. We have performed a systematic review of the literature to examine the various techniques for measurement of breast volume and to assess their accuracy and usefulness in clinical practice. Each of the fifteen studies we identified had more than ten live participants and assessed volume measurement accuracy using a gold standard based on the volume, or mass, of a mastectomy specimen. Many of the studies from this review report large (> 200 ml) uncertainty in breast volume, and many fail to assess measurement accuracy using appropriate statistical tools. Of the methods assessed, MRI scanning consistently demonstrated the highest accuracy, with three studies reporting errors lower than 10% for small (250 ml), medium (500 ml) and large (1,000 ml) breasts. However, since MRI is a high-cost, non-routine assessment, other methods may be more appropriate.
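The two accuracy statistics such studies rely on, per-case percentage error against the mastectomy specimen and Bland-Altman limits of agreement, are straightforward to compute. A sketch with fabricated illustrative numbers, not data from any reviewed study:

```python
import statistics

def percentage_errors(measured, gold):
    """Per-case percentage error of a method against the gold-standard volume."""
    return [100.0 * (m - g) / g for m, g in zip(measured, gold)]

def limits_of_agreement(measured, gold):
    """Bland-Altman bias and 95% limits of agreement (bias +/- 1.96 sd)."""
    diffs = [m - g for m, g in zip(measured, gold)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# fabricated illustrative values (ml): estimates vs mastectomy-specimen volumes
specimen = [250.0, 500.0, 1000.0]
estimate = [265.0, 490.0, 1040.0]
bias, lo, hi = limits_of_agreement(estimate, specimen)
```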
Computing with cells: membrane systems - some complexity issues.
Membrane computing is a branch of natural computing which abstracts computing models from the structure and the functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, which consists of a hierarchical arrangement of membranes which delimit compartments where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By using the rules in a nondeterministic/deterministic maximally parallel manner, transitions between the system configurations can be obtained. A sequence of transitions forms a computation, describing how the system evolves. Various ways of controlling the transfer of objects from one membrane to another and applying the rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have a great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems once future biotechnology provides a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues such as universality vs. nonuniversality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
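A single evolution step under the maximally parallel mode can be sketched for a one-membrane system (a toy sketch only: real P systems choose the applied multiset of rules nondeterministically, whereas this version is greedy in rule order):

```python
from collections import Counter

def maximally_parallel_step(objects, rules):
    """One maximally parallel evolution step in a one-membrane P system.

    objects: Counter of symbol objects in the membrane
    rules:   list of (lhs, rhs) Counters, read as multiset rewriting rules

    Rules are applied until none fits the remaining objects (maximality);
    all right-hand sides are added only at the end of the step.
    """
    ms = Counter(objects)
    produced = Counter()
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if all(ms[o] >= n for o, n in lhs.items()):
                ms.subtract(lhs)      # consume the left-hand side
                produced.update(rhs)  # remember the right-hand side
                changed = True
    ms.update(produced)
    return +ms  # drop zero counts

# hypothetical rules: a -> bb and b -> c
rules = [(Counter("a"), Counter("bb")), (Counter("b"), Counter("c"))]
step = maximally_parallel_step(Counter("aab"), rules)  # Counter({'b': 4, 'c': 1})
```

Both copies of a and the single b evolve in the same step, which is what "maximally parallel" means.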
Space Complexity of the Directed Reachability Problem over Surface-Embedded Graphs
The graph reachability problem, the computational task of deciding whether there is a path between two given nodes in a graph, is the canonical problem for studying space-bounded computations. Three central open questions regarding the space complexity of the reachability problem over directed graphs are: (1) improving Savitch's O(log^2 n) space bound, (2) designing a polynomial-time algorithm with an O(n^{1-ε}) space bound, and (3) designing an unambiguous non-deterministic log-space (UL) algorithm. These are well-known open questions in complexity theory, and solving any one of them will be a major breakthrough. We will discuss some of the recent progress reported on these questions for certain subclasses of surface-embedded directed graphs.
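Savitch's bound comes from halving the path length recursively and reusing space across the two recursive calls. The control structure can be sketched over an explicit graph (a sketch only: the real space-bounded algorithm stores just O(log n)-bit vertex names per recursion frame rather than keeping the graph in memory):

```python
def savitch_reach(adj, s, t, steps=None):
    """Savitch-style reachability: is there a path s -> t of length <= steps?

    The path length is halved at each level, so the recursion depth is
    O(log n); with O(log n)-bit vertex names per frame this yields the
    O(log^2 n) space bound for directed reachability.
    """
    n = len(adj)
    if steps is None:
        steps = n
    if steps == 0:
        return s == t
    if steps == 1:
        return s == t or t in adj[s]
    half = steps // 2
    # "guess" the midpoint m and check both halves recursively
    return any(savitch_reach(adj, s, m, half) and
               savitch_reach(adj, m, t, steps - half)
               for m in adj)

adj = {0: [1], 1: [2], 2: [], 3: [0]}
savitch_reach(adj, 0, 2)  # True: 0 -> 1 -> 2
savitch_reach(adj, 0, 3)  # False: no edge leads into 3
```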
β_k-complete problems and greediness
Kintala and Fischer [7] defined the limited nondeterminism hierarchy within NP, the so-called β hierarchy. β_k is the class of languages recognized by polynomial-time bounded Turing machines that make at most O(log^k n) nondeterministic moves, where n is the length of the input. It has been conjectured that "by restricting the amount of nondeterminism in NP-complete problems, we do not seem to obtain complete problems for β_k [4]". We demonstrate that this statement is incorrect under what seems to us to be the natural interpretation of the term "restricting the amount of nondeterminism". We develop the concept of limited-nondeterminism-preserving reductions, and obtain complete problems for β_k by restricting the amount of nondeterminism in NP-complete problems. We also discuss the connections between β hierarchy completeness and greedy algorithms; we show that using greediness we can define many complete problems for β_k.
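The brute-force content of the definition: a machine using O(log^k n) nondeterministic bits can be simulated deterministically by enumerating all guess strings, which is polynomial for k = 1 and quasi-polynomial beyond. A sketch, where the `verify` predicate is a hypothetical stand-in for the machine's deterministic polynomial-time part (nothing here is taken from the paper):

```python
import math
from itertools import product

def simulate_beta_machine(verify, x, k, c=1):
    """Deterministically simulate a machine that uses c * (log2 n)^k
    nondeterministic bits on input x by trying every guess string.

    For k = 1 there are only about n^c guesses, so the simulation runs
    in polynomial time (β_1 collapses to P); for k >= 2 the number of
    guesses, 2^(c log^k n), is quasi-polynomial.
    """
    n = max(len(x), 2)
    bits = int(c * math.log2(n) ** k)
    return any(verify(x, guess) for guess in product((0, 1), repeat=bits))

# toy verifier: the guess encodes a position that should hold a "1"
def contains_one(x, guess):
    pos = int("".join(map(str, guess)), 2) if guess else 0
    return pos < len(x) and x[pos] == "1"

simulate_beta_machine(contains_one, "00100", k=1)  # True
simulate_beta_machine(contains_one, "00000", k=1)  # False
```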